Results 1 - 5 of 5
1.
Animals (Basel) ; 13(9)2023 Apr 27.
Article in English | MEDLINE | ID: covidwho-2316680

ABSTRACT

The H9N2 avian influenza virus has become one of the dominant avian influenza subtypes in poultry and has caused significant harm to chickens in China, with great economic losses from reduced egg production and high mortality due to co-infection with other pathogens. Accurately predicting H9N2 status from easily available production data would therefore be essential for preventing and controlling H9N2 outbreaks in advance. This study developed a machine learning framework based on the XGBoost classification algorithm, using 3 months of laying rates and mortalities collected from three H9N2-infected laying-hen houses with complete onset cycles. The framework automatically predicts the H9N2 status of an individual house for the next 3 days (H9N2 status + 0, H9N2 status + 1, H9N2 status + 2) using five time frames of input data (day + 0, day - 1, day - 2, day - 3, day - 4). The prediction models achieved an accuracy > 90%, a recall > 90%, a precision > 80%, and an area under the receiver operating characteristic curve ≥ 0.85. Models with day + 0 and day - 1 inputs are recommended for predicting H9N2 status + 0 and H9N2 status + 1, for direct or auxiliary monitoring of the disease's occurrence and development. Such a framework offers new insights into predicting H9N2 outbreaks and has further practical potential for assisting in disease monitoring.
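The abstract does not include code, but the pipeline it describes (lagged laying-rate and mortality features feeding an XGBoost classifier that predicts H9N2 status days ahead) can be sketched roughly as below. This is an illustrative reconstruction, not the authors' implementation: the file name, column names, and hyperparameters are assumptions.

```python
# Hypothetical sketch of the H9N2 prediction framework described above.
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, precision_score, roc_auc_score

# Daily records per house: laying_rate, mortality, h9n2_status (0/1). File is hypothetical.
df = pd.read_csv("house_records.csv")

# Lagged features covering the paper's five time frames (day+0 .. day-4).
for lag in range(5):
    df[f"laying_rate_lag{lag}"] = df.groupby("house")["laying_rate"].shift(lag)
    df[f"mortality_lag{lag}"] = df.groupby("house")["mortality"].shift(lag)

# Predict status `horizon` days ahead (status+0/+1/+2 in the paper's terms).
horizon = 1
df["target"] = df.groupby("house")["h9n2_status"].shift(-horizon)
df = df.dropna()

features = [c for c in df.columns if "lag" in c]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["target"].astype(int), test_size=0.2, shuffle=False)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print("accuracy :", accuracy_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("ROC AUC  :", roc_auc_score(y_test, proba))
```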

2.
16th IEEE International Conference on Intelligent Systems and Knowledge Engineering, ISKE 2021 ; : 458-463, 2021.
Article in English | Scopus | ID: covidwho-1846124

ABSTRACT

As COVID-19 continues to spread around the world and non-pharmaceutical interventions (NPIs) continue to be strengthened, the impact of COVID-19 on the film industry has not yet been clearly quantified. In this study, a difference-in-differences model is used to quantify the impact of the COVID-19 pandemic on the box office. Results indicate that the pandemic has a significant negative effect on the daily global box office. Additionally, based on a research dataset containing information on movies and COVID-19, ten machine learning methods were used to build prediction models of the cumulative global box office. The experiments showed that Extremely Randomized Trees had the best predictive performance, and COVID-19 features were found to improve the predictive performance of several models. © 2021 IEEE.
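A minimal sketch of the two techniques this abstract names: a difference-in-differences regression, where the treatment effect is the coefficient on the treated-by-post interaction, and an Extremely Randomized Trees regressor for the cumulative box office. The panel file, variable names, and feature set are hypothetical; the paper's exact specification (controls, fixed effects) is not reproduced.

```python
# Difference-in-differences: effect of COVID-19 on (log) daily box office.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("box_office_panel.csv")  # hypothetical: log_box_office, treated, post
# treated = 1 for markets exposed to COVID-19; post = 1 after the outbreak began.
# The DiD estimate is the coefficient on treated:post.
did = smf.ols("log_box_office ~ treated + post + treated:post", data=panel).fit()
print(did.summary().tables[1])

# Cumulative box-office prediction with Extremely Randomized Trees,
# the best performer reported in the paper (feature columns hypothetical).
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

X = panel[["budget", "screens", "daily_new_cases"]]  # movie + COVID-19 features
y = panel["cumulative_box_office"]
scores = cross_val_score(ExtraTreesRegressor(n_estimators=300), X, y,
                         scoring="neg_mean_absolute_error", cv=5)
print("CV MAE:", -scores.mean())
```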

3.
21st IEEE International Conference on Data Mining Workshops, ICDMW 2021 ; 2021-December:517-526, 2021.
Article in English | Scopus | ID: covidwho-1730932

ABSTRACT

COVID-19 has been a public health emergency of international concern since early 2020. Reliable forecasting is critical to diminish the impact of this disease. To date, a large number of forecasting models have been proposed, mainly statistical models, compartmental models, and deep learning models. However, due to uncertain factors that vary across regions, such as economics and government policy, no forecasting model appears to be the best for all scenarios. In this paper, we perform a quantitative analysis of COVID-19 forecasting of confirmed cases and deaths across different regions in the United States with different forecasting horizons, and evaluate the relative impacts of three dimensions on predictive performance (improvement and variation) through different evaluation metrics: model selection, hyperparameter tuning, and the length of time series required for training. We find that a dimension offering larger performance gains can, if not well tuned, also incur harsher performance penalties. Furthermore, model selection is the dominant factor in determining predictive performance: it is responsible for both the largest improvement and the largest variation in performance across all prediction tasks and regions. While practitioners may perform more complicated time series analysis in practice, they should be able to achieve reasonable results if they have adequate insight into key decisions like model selection. © 2021 IEEE.
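The kind of factorial evaluation the paper describes, varying model choice, training-window length, and forecast horizon and comparing held-out error, can be sketched as follows. The two models and the synthetic series are placeholders standing in for the paper's actual forecasting models and reported case counts.

```python
# Sketch of a model x window x horizon evaluation grid (all inputs synthetic).
import numpy as np

def naive_last(train, h):
    # Persistence baseline: repeat the last observed value h times.
    return np.repeat(train[-1], h)

def linear_trend(train, h):
    # Least-squares trend extrapolation.
    t = np.arange(len(train))
    slope, intercept = np.polyfit(t, train, 1)
    return intercept + slope * (len(train) + np.arange(h))

models = {"naive": naive_last, "trend": linear_trend}
series = np.cumsum(np.random.poisson(100, 120)).astype(float)  # stand-in for case counts

for horizon in (7, 14, 28):
    for window in (30, 60, 90):
        train = series[-(window + horizon):-horizon]
        truth = series[-horizon:]
        for name, model in models.items():
            mae = np.abs(model(train, horizon) - truth).mean()
            print(f"h={horizon:2d} window={window:2d} {name:5s} MAE={mae:8.1f}")
```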

4.
13th International Conference on Bioinformatics and Biomedical Technology, ICBBT 2021 ; : 108-115, 2021.
Article in English | Scopus | ID: covidwho-1596247

ABSTRACT

Proper identification of biomarkers used in drug development is critical, as the race to find a vaccine for COVID-19 has shown. Gene-expression-based marker discovery often requires that feature selection be performed. However, a plethora of feature selection methods exist, and they do not select the same feature subsets for the same dataset; users are often faced with choosing which subset to use. To help with this conundrum, several approaches have been proposed to guide feature-subset selection, among which ensemble methods (i.e., combining subsets from multiple methods) have gained attention recently. In an ensemble approach, two issues deserve attention: the stability of the feature subsets being combined and the classification performance of the combined feature subsets. Hence the interest in exploring how stability and performance relate, which is the central topic investigated in this paper. First, 5/6 different feature selection methods are used to create feature subsets for 3 different transcriptomics datasets. Then, the stability and performance of these feature subsets under a given merging strategy are computed using 5 stability metrics and 3 performance metrics for 3 different classifiers. Our results suggest that performance and stability criteria are complementary and conflicting, and that both must be considered when deciding on the final selected feature subsets. We use two reference metrics to illustrate such a selection. © 2021 ACM.
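To make the stability and merging ideas concrete, here is a small sketch of one standard stability measure (average pairwise Jaccard similarity between selected feature subsets) and a simple majority-vote merging strategy. The paper's five stability metrics and its actual merging strategy are not specified here; the gene subsets are toy data.

```python
# Average pairwise Jaccard stability of feature subsets, plus a majority merge.
from itertools import combinations
from collections import Counter

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def stability(subsets: list) -> float:
    # Mean Jaccard similarity over all pairs of subsets.
    pairs = list(combinations(subsets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Feature subsets selected by the same method on three resamples (toy data).
runs = [{"g1", "g2", "g3", "g7"}, {"g1", "g2", "g5", "g7"}, {"g1", "g3", "g7", "g9"}]
print("stability:", round(stability(runs), 3))

# One possible merging strategy: keep features chosen by a majority of runs.
counts = Counter(f for s in runs for f in s)
ensemble = {f for f, c in counts.items() if c >= 2}
print("merged subset:", sorted(ensemble))
```

The merged subset would then be scored with classifiers to obtain the performance side of the stability/performance trade-off the paper investigates.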

5.
Int J Gen Med ; 14: 4711-4721, 2021.
Article in English | MEDLINE | ID: covidwho-1378148

ABSTRACT

PURPOSE: We sought to explore the prognostic value of the blood urea nitrogen (BUN) to serum albumin ratio (BAR) and to develop a prediction model for critical illness in COVID-19 patients. PATIENTS AND METHODS: This was a retrospective, multicenter, observational study of adult hospitalized COVID-19 patients from three provinces in China between January 14 and March 9, 2020. The primary outcome was critical illness, comprising admission to the intensive care unit (ICU), need for invasive mechanical ventilation (IMV), or death. Clinical data were collected within 24 hours of hospital admission. The predictive performance of BAR was tested by multivariate logistic regression analysis and receiver operating characteristic (ROC) curve analysis, and a nomogram was then developed. RESULTS: A total of 1370 patients with COVID-19 were included, of whom 113 (8.2%) eventually developed critical illness. Baseline age (OR: 1.031, 95% CI: 1.014, 1.049), respiratory rate (OR: 1.063, 95% CI: 1.009, 1.120), unconsciousness (OR: 40.078, 95% CI: 5.992, 268.061), lymphocyte count (OR: 0.352, 95% CI: 0.204, 0.607), total bilirubin (OR: 1.030, 95% CI: 1.001, 1.060), and BAR (OR: 1.319, 95% CI: 1.183, 1.471) were independent risk factors for critical illness. The predictive AUC of BAR was 0.821 (95% CI: 0.784, 0.858; P < 0.01), and the optimal cut-off value of BAR was 3.7887 mg/g (sensitivity: 0.690, specificity: 0.786; positive predictive value: 0.225, negative predictive value: 0.966; positive likelihood ratio: 3.226, negative likelihood ratio: 0.394). The C-index of the nomogram including the above six predictors was 0.9031125 (95% CI: 0.8720542, 0.9341708). CONCLUSION: Elevated BAR at admission is an independent risk factor for critical illness in COVID-19, and the novel predictive nomogram including BAR has superior predictive performance.
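The core of the BAR analysis (computing the ratio, its ROC AUC for critical illness, and an optimal cut-off) can be sketched as below. Selecting the cut-off by maximizing Youden's J is a common choice but an assumption here, as is the data file and its column names; the study's nomogram and multivariate model are not reproduced.

```python
# Illustrative sketch of the BAR ROC analysis (file and columns hypothetical).
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("covid_admissions.csv")  # bun (mg/dL), albumin (g/dL), critical (0/1)
df["bar"] = df["bun"] / df["albumin"]     # (mg/dL) / (g/dL) = mg/g, matching the abstract

auc = roc_auc_score(df["critical"], df["bar"])
fpr, tpr, thresholds = roc_curve(df["critical"], df["bar"])
youden = tpr - fpr                        # Youden's J = sensitivity + specificity - 1
cutoff = thresholds[youden.argmax()]
print(f"AUC={auc:.3f}, optimal BAR cut-off={cutoff:.4f} mg/g")
```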
